Random intercepts
Humboldt-Universität zu Berlin
2024-01-12
Today we will learn…
lme4 and lmerTest have a function lmer()
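Loading both packages might look like this (a sketch; which `lmer()` is used depends on load order unless you disambiguate with the `::` prefix):

```r
library(lme4)     # model-fitting functions: lmer(), VarCorr(), fixef(), ranef()
library(lmerTest) # also exports lmer(), adding Satterthwaite df and p-values

# the unprefixed call lmer(...) uses whichever package's version is "on top";
# lme4::lmer() and lmerTest::lmer() make the choice explicit
```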
First, we load the data:

df_biondo <-
  read_csv(here("data", "Biondo.Soilemezidi.Mancini_dataset_ET.csv"),
    locale = locale(encoding = "Latin1") # for special characters in Spanish
  ) |>
  clean_names() |>
  mutate(gramm = ifelse(gramm == "0", "ungramm", "gramm")) |>
  mutate_if(is.character, as_factor) |> # all character variables as factors
  filter(adv_type == "Deic") |>
  droplevels()

And take a look at the data:
# A tibble: 6 × 13
sj item adv_type adv_t verb_t gramm roi label fp gp tt ri
<fct> <dbl> <fct> <fct> <fct> <fct> <dbl> <fct> <dbl> <dbl> <dbl> <dbl>
1 1 54 Deic Past Past gramm 1 En la c… 1173 1173 1173 0
2 1 54 Deic Past Past gramm 2 ayer te… 474 474 474 0
3 1 54 Deic Past Past gramm 3 los car… 910 910 910 0
4 1 54 Deic Past Past gramm 4 encarga… 1027 1027 1027 0
5 1 54 Deic Past Past gramm 5 muchas … 521 521 521 0
6 1 54 Deic Past Past gramm 6 al prov… 1029 1029 1029 0
# ℹ 1 more variable: ro <dbl>
Figure 1: By-participant boxplot of first-pass RTs at the verb region with overall median FP value in red
Figure 2: By-item boxplot of first-pass RTs at the verb region with overall median FP value in red
Figure 3: Predicted by-participant varying intercepts (A) and slopes (B) with overall effects in black
Figure 4: By-item varying intercepts (A) and slopes (B) with overall effects in black
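The first model below can be fit with a call along these lines (a sketch; the object and variable names follow the slides):

```r
fit_lmm_fp_sj <- lme4::lmer(
  log(fp) ~ verb_t * gramm + (1 | sj), # fixed effects + by-participant intercepts
  data   = df_biondo,
  subset = roi == 4                    # verb region only
)
summary(fit_lmm_fp_sj)
```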
Let's fit our first linear mixed model, fit_lmm_fp_sj, which contains (inside lmer()) log first-pass RTs as a function of our fixed effects, plus by-participant varying intercepts ((1 | sj)). We restrict the data to the verb region (filter(roi == 4)) and inspect the model with the summary() function, just as we did with (g)lm():

Linear mixed model fit by REML ['lmerMod']
Formula: log(fp) ~ verb_t * gramm + (1 | sj)
Data: df_biondo
Subset: roi == 4
REML criterion at convergence: 4479.1
Scaled residuals:
Min 1Q Median 3Q Max
-4.0560 -0.6427 -0.0419 0.6168 4.0901
Random effects:
Groups Name Variance Std.Dev.
sj (Intercept) 0.06573 0.2564
Residual 0.18030 0.4246
Number of obs: 3795, groups: sj, 60
Fixed effects:
Estimate Std. Error t value
(Intercept) 5.957102 0.033809 176.199
verb_t1 0.062209 0.013787 4.512
gramm1 0.003466 0.013787 0.251
verb_t1:gramm1 -0.015741 0.027573 -0.571
Correlation of Fixed Effects:
(Intr) vrb_t1 gramm1
verb_t1 0.000
gramm1 0.000 -0.002
vrb_t1:grm1 0.000 0.002 0.000
We can also extract the random-effects variances on their own with the VarCorr() function from lme4:
Groups Name Variance Std.Dev.
sj (Intercept) 0.065731 0.25638
Residual 0.180305 0.42462
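If you want the variance components as a data frame (e.g., to build a table), one option is (a sketch):

```r
as.data.frame(VarCorr(fit_lmm_fp_sj))
# columns: grp, var1, var2, vcov (variance), sdcor (standard deviation)
```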
There is a name clash for lmer() between lme4 and lmerTest, and we set lme4 to be our default for this function. To get p-values we call lmerTest::lmer(), where the prefix lmerTest:: specifies which package to retrieve the lmer() function from:

Linear mixed model fit by REML. t-tests use Satterthwaite's method [
lmerModLmerTest]
Formula: log(fp) ~ verb_t * gramm + (1 | sj)
Data: df_biondo
Subset: roi == 4
REML criterion at convergence: 4479.1
Scaled residuals:
Min 1Q Median 3Q Max
-4.0560 -0.6427 -0.0419 0.6168 4.0901
Random effects:
Groups Name Variance Std.Dev.
sj (Intercept) 0.06573 0.2564
Residual 0.18030 0.4246
Number of obs: 3795, groups: sj, 60
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 5.957102 0.033809 58.991899 176.199 < 0.0000000000000002
verb_t1 0.062209 0.013787 3732.065822 4.512 0.00000662
gramm1 0.003466 0.013787 3732.032139 0.251 0.802
verb_t1:gramm1 -0.015741 0.027573 3732.037124 -0.571 0.568
(Intercept) ***
verb_t1 ***
gramm1
verb_t1:gramm1
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) vrb_t1 gramm1
verb_t1 0.000
gramm1 0.000 -0.002
vrb_t1:grm1 0.000 0.002 0.000
What's new here? The fitting message now mentions Satterthwaite's method, and the object class is lmerModLmerTest. The fixed-effects table gains df (degrees of freedom) and p-values (Pr(>|t|)): lme4::lmer() doesn't provide degrees of freedom or p-values, whereas lmerTest uses the Satterthwaite method, which is fine for our purposes.

Compare the fixed effects from lm() and lmer():

| term | estimate | std.error | statistic | p.value |
|---|---|---|---|---|
| (Intercept) | 5.957 | 0.008 | 741.568 | 0.0000000 |
| verb_t1 | 0.061 | 0.016 | 3.809 | 0.0001415 |
| gramm1 | 0.003 | 0.016 | 0.193 | 0.8469596 |
| verb_t1:gramm1 | -0.015 | 0.032 | -0.474 | 0.6352122 |
| term | estimate | std.error | statistic | df | p.value |
|---|---|---|---|---|---|
| (Intercept) | 5.957 | 0.034 | 176.199 | 59 | 0.0000000 |
| verb_t1 | 0.062 | 0.014 | 4.512 | 3732 | 0.0000066 |
| gramm1 | 0.003 | 0.014 | 0.251 | 3732 | 0.8015026 |
| verb_t1:gramm1 | -0.016 | 0.028 | -0.571 | 3732 | 0.5681183 |
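Tables like these can be produced with tidy() (a sketch; broom handles lm objects, broom.mixed handles lmer objects):

```r
broom::tidy(fit_lm_fp)                               # lm: no df column
broom.mixed::tidy(fit_lmm_fp_sj, effects = "fixed")  # lmerTest fit: includes df and p.value
```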
This tells us that our inclusion of by-participant varying intercepts accounts for some of the variance in the model that was not accounted for in the fixed-effects-only model. But are there any other possible sources of variance that we haven't taken into account?
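Crossed random intercepts for participants and items simply add a second (1 | …) term (a sketch; names follow the slides):

```r
fit_lmm_fp_sj_item <- lmerTest::lmer(
  log(fp) ~ verb_t * gramm + (1 | sj) + (1 | item), # crossed by-participant and by-item intercepts
  data   = df_biondo,
  subset = roi == 4
)
summary(fit_lmm_fp_sj_item)
```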
Linear mixed model fit by REML. t-tests use Satterthwaite's method [
lmerModLmerTest]
Formula: log(fp) ~ verb_t * gramm + (1 | sj) + (1 | item)
Data: df_biondo
Subset: roi == 4
REML criterion at convergence: 4220.3
Scaled residuals:
Min 1Q Median 3Q Max
-4.1568 -0.6169 -0.0257 0.6006 4.0422
Random effects:
Groups Name Variance Std.Dev.
item (Intercept) 0.01940 0.1393
sj (Intercept) 0.06654 0.2580
Residual 0.16089 0.4011
Number of obs: 3795, groups: item, 96; sj, 60
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 5.956404 0.036790 79.200811 161.903 < 0.0000000000000002
verb_t1 0.061892 0.013025 3637.133151 4.752 0.00000209
gramm1 0.003212 0.013025 3637.183379 0.247 0.805
verb_t1:gramm1 -0.014316 0.026049 3637.102346 -0.550 0.583
(Intercept) ***
verb_t1 ***
gramm1
verb_t1:gramm1
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) vrb_t1 gramm1
verb_t1 0.000
gramm1 0.000 -0.002
vrb_t1:grm1 0.000 0.002 0.000
| group | term | estimate |
|---|---|---|
| sj | sd__(Intercept) | 0.2563809 |
| Residual | sd__Observation | 0.4246230 |
| group | term | estimate |
|---|---|---|
| item | sd__(Intercept) | 0.1392928 |
| sj | sd__(Intercept) | 0.2579514 |
| Residual | sd__Observation | 0.4011073 |
The residual standard deviation is smaller in fit_lmm_fp_sj_item (0.4), because by-item intercepts account for some of the variance that was unaccounted for in fit_lmm_fp_sj (0.42):

| term | estimate | std.error | statistic | df | p.value |
|---|---|---|---|---|---|
| (Intercept) | 5.957 | 0.034 | 176.199 | 59 | 0.0000000 |
| verb_t1 | 0.062 | 0.014 | 4.512 | 3732 | 0.0000066 |
| gramm1 | 0.003 | 0.014 | 0.251 | 3732 | 0.8015026 |
| verb_t1:gramm1 | -0.016 | 0.028 | -0.571 | 3732 | 0.5681183 |
| term | estimate | std.error | statistic | df | p.value |
|---|---|---|---|---|---|
| (Intercept) | 5.956 | 0.037 | 161.903 | 79 | 0.0000000 |
| verb_t1 | 0.062 | 0.013 | 4.752 | 3637 | 0.0000021 |
| gramm1 | 0.003 | 0.013 | 0.247 | 3637 | 0.8052568 |
| verb_t1:gramm1 | -0.014 | 0.026 | -0.550 | 3637 | 0.5826522 |
sjPlot::plot_model(fit_lm_fp, type = "int") +
geom_line(position = position_dodge(.1)) +
ylim(340,440) +
labs(title = "fit_lm_fp") +
sjPlot::plot_model(fit_lmm_fp_sj, type = "int") +
geom_line(position = position_dodge(.1)) +
ylim(340,440) +
labs(title = "fit_lmm_fp_sj") +
sjPlot::plot_model(fit_lmm_fp_sj_item, type = "int") +
geom_line(position = position_dodge(.1)) +
ylim(340,440) +
labs(title = "fit_lmm_fp_sj_item") +
plot_layout(guides = "collect")

Figure 6: Comparison of predicted estimates and 95% confidence intervals for the three models
We can use the anova() function to compare model fit:

Data: df_biondo
Subset: roi == 4
Models:
fit_lmm_fp_sj: log(fp) ~ verb_t * gramm + (1 | sj)
fit_lmm_fp_sj_item: log(fp) ~ verb_t * gramm + (1 | sj) + (1 | item)
npar AIC BIC logLik deviance Chisq Df
fit_lmm_fp_sj 6 4467.3 4504.8 -2227.7 4455.3
fit_lmm_fp_sj_item 7 4210.3 4254.0 -2098.2 4196.3 258.98 1
Pr(>Chisq)
fit_lmm_fp_sj
fit_lmm_fp_sj_item < 0.00000000000000022 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
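The likelihood-ratio test above comes from a call like the following (note that anova() refits lmer models with maximum likelihood rather than REML before comparing them):

```r
anova(fit_lmm_fp_sj, fit_lmm_fp_sj_item)
```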
We can use coef() to extract fixed-effect estimates from lm objects:

   (Intercept)        verb_t1         gramm1 verb_t1:gramm1
5.957251870 0.061204153 0.003101061 -0.015245374
For lmer objects we need fixef():

   (Intercept)        verb_t1         gramm1 verb_t1:gramm1
5.95640363 0.06189237 0.00321152 -0.01431578
This corresponds to $coefficients in the model summary. Note that coef() behaves very differently with lmer objects, extracting the random-effects estimates per level:

  item (Intercept)    verb_t1    gramm1 verb_t1:gramm1
1 1 6.022184 0.06189237 0.00321152 -0.01431578
2 2 5.761268 0.06189237 0.00321152 -0.01431578
3 3 5.854873 0.06189237 0.00321152 -0.01431578
4 4 6.056862 0.06189237 0.00321152 -0.01431578
5 5 6.138213 0.06189237 0.00321152 -0.01431578
6 6 6.331058 0.06189237 0.00321152 -0.01431578
coef() returns a list object, with one data frame for item and one for sj (head() is for presentation purposes):

  item (Intercept)    verb_t1    gramm1 verb_t1:gramm1
1 1 6.022184 0.06189237 0.00321152 -0.01431578
2 2 5.761268 0.06189237 0.00321152 -0.01431578
3 3 5.854873 0.06189237 0.00321152 -0.01431578
4 4 6.056862 0.06189237 0.00321152 -0.01431578
5 5 6.138213 0.06189237 0.00321152 -0.01431578
6 6 6.331058 0.06189237 0.00321152 -0.01431578
sj (Intercept) verb_t1 gramm1 verb_t1:gramm1
1 1 6.401777 0.06189237 0.00321152 -0.01431578
2 2 5.794179 0.06189237 0.00321152 -0.01431578
3 07 5.869627 0.06189237 0.00321152 -0.01431578
4 09 5.782527 0.06189237 0.00321152 -0.01431578
5 10 6.621081 0.06189237 0.00321152 -0.01431578
6 11 5.913712 0.06189237 0.00321152 -0.01431578
But what about verb_t1, gramm1, and verb_t1:gramm1? These do not vary by level, because we only fit varying intercepts. The ranef() function provides the deviation of each random intercept estimate from the model intercept, as a list with one element per grouping factor:

$item
(Intercept)
1 0.065780608
2 -0.195135717
3 -0.101530802
4 0.100458122
5 0.181809783
6 0.374654251
7 0.092819196
8 0.136954752
9 0.058102873
10 -0.054265683
11 -0.149873360
12 0.110751479
13 0.147096084
14 0.127958914
15 0.057606192
16 -0.081076541
17 0.125828603
18 -0.073509315
19 -0.012746330
20 0.110139903
21 -0.155506252
22 0.126398878
23 0.166876070
24 -0.034901551
25 0.146486591
26 0.074730265
27 0.088381259
28 -0.092678849
29 0.014411435
30 0.066763872
31 -0.038994027
32 -0.120446386
33 -0.194224589
34 -0.072322285
35 -0.084322521
36 -0.103494886
38 0.118984111
39 0.091933443
40 -0.086216865
41 0.134573606
42 -0.117043412
43 -0.062584500
44 -0.001550553
46 0.008950192
47 -0.099435593
48 -0.107745509
49 -0.062632428
51 -0.023401507
52 0.206710720
53 -0.016527981
54 -0.032560697
55 -0.023730903
56 0.046093704
57 0.217142741
58 0.023060958
59 -0.217157054
60 0.059161953
61 0.148130135
62 0.076164596
63 -0.162879330
64 0.003286442
66 -0.145269266
67 0.038320168
68 0.144997093
69 -0.011825617
70 0.141234735
72 -0.019280976
73 0.243151378
74 0.005062608
75 -0.078003084
76 0.030885201
77 -0.105347877
78 0.267962295
79 0.049474689
80 0.012452861
81 -0.126197903
82 -0.270275547
83 -0.231166335
84 -0.148988678
85 -0.074619155
86 -0.316661058
87 -0.021441852
88 0.004467788
89 -0.068689368
90 0.168752654
91 -0.130311515
92 0.181189404
93 -0.113007279
94 -0.038028968
95 0.018891686
96 -0.010844951
97 -0.160693789
98 -0.189160313
99 -0.046282799
100 0.038520282
101 0.031027185
$sj
(Intercept)
1 0.4453736686
2 -0.1622245920
07 -0.0867769204
09 -0.1738770087
10 0.6646773896
11 -0.0426912397
12 0.1966276739
14 -0.2534130428
15 0.0744365289
16 0.3102703155
17 -0.4416672585
18 0.3483163760
20 -0.0004747722
21 -0.3905578006
22 0.1588478248
23 0.2750736081
24 -0.0164601317
26 0.0545049227
27 0.1681915401
28 0.3829174064
29 -0.0724729416
30 -0.1307018626
31 0.5591486944
32 -0.2297591961
33 -0.1058829882
34 -0.1929520704
25 0.2651328954
73 0.1010490787
74 0.1222262133
35 -0.3466865197
36 0.2458666126
37 -0.2586775333
38 -0.1170898527
39 -0.0707477418
40 -0.2595291388
41 -0.1337344231
42 0.1285752340
43 -0.3904903423
46 -0.5196576089
47 0.1586324543
48 0.2627372846
49 -0.1194323811
50 -0.3470873902
51 0.2998361957
52 0.1196091237
53 -0.1241739663
54 -0.1457289205
55 0.1724245610
56 -0.1790151326
57 -0.2663456249
58 0.0161788674
59 0.0070594181
60 0.2180027256
61 -0.0234205903
62 0.2711544584
63 -0.2167262452
64 -0.3457197069
65 0.0165792343
66 0.0426515896
67 0.0780730481
with conditional variances for "item" "sj"
ranef()$grouping_factor or pluck("grouping_factor") selects the relevant grouping factor:

   (Intercept)
1 0.44537367
2 -0.16222459
07 -0.08677692
09 -0.17387701
10 0.66467739
11 -0.04269124
The deviations from ranef() (sj_dev in Table 7) equal the difference between the model intercept (model_intercept) and the by-participant random intercept estimates (sj_est):

| sj | sj_est | sj_dev | est_minus_dev | model_intercept | est_minus_intercept |
|---|---|---|---|---|---|
| 1 | 6.402 | 0.445 | 5.956 | 5.956 | 0.445 |
| 2 | 5.794 | -0.162 | 5.956 | 5.956 | -0.162 |
| 07 | 5.870 | -0.087 | 5.956 | 5.956 | -0.087 |
| 09 | 5.783 | -0.174 | 5.956 | 5.956 | -0.174 |
| 10 | 6.621 | 0.665 | 5.956 | 5.956 | 0.665 |
| 11 | 5.914 | -0.043 | 5.956 | 5.956 | -0.043 |
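The relationship in the table can be checked directly: adding each ranef() deviation to the model intercept from fixef() reproduces the per-participant intercepts returned by coef() (a sketch):

```r
sj_est <- coef(fit_lmm_fp_sj_item)$sj[, "(Intercept)"]   # per-participant intercepts
sj_dev <- ranef(fit_lmm_fp_sj_item)$sj[, "(Intercept)"]  # deviations from the intercept
all.equal(sj_est, fixef(fit_lmm_fp_sj_item)[["(Intercept)"]] + sj_dev)
```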
According to Sonderegger (2023) (p. 297), we should report:
We conducted the analysis by fitting linear mixed-effect models to our data, using the R package lme4 (Bates et al., 2014). We included Time Reference (past, future) and Verb Match (match, mismatch) as fixed-effect factors […] by adopting sum contrast coding (Schad et al., 2020): past and match conditions were coded as –.5, while future and mismatch conditions were coded as .5. […] Moreover, we included crossed random intercepts and random slopes for all fixed-effect parameters for subject and item grouping factors (Barr et al., 2013) in all models. […] Logit mixed-effect models were employed (Jaeger, 2008) for the analysis of the probability of regression measure. […] P-values were derived by using the lmerTest package (Kuznetsova et al., 2017).
— Biondo et al. (2022), p. 9
P-values for individual predictors were computed using lmerTest, with the Satterthwaite option for denominator degrees of freedom for F statistics.
— Troyer & Kutas (2020), p. 9
You can get the citation for an R package by running citation("lmerTest") in the Console.

An example results write-up: A main effect of tense was found in first-pass reading times at the verb region (Est = 0.062, t = 4.8, p < .001), with the future tense (M = 458 ms, SD = 274 ms) eliciting longer first-pass reading times than the past tense.
tidy(fit_lmm_fp_sj_item,
effects = "fixed") |>
as_tibble() |>
select(-effect) |>
mutate(p.value = format_pval(p.value),
across(c(estimate, std.error, statistic), ~ round(.x, 3)),
df = round(df,1)) |>
mutate(term = fct_recode(term,
"Intercept" = "(Intercept)",
"Tense" = "verb_t1",
"Grammaticality" = "gramm1",
"Tense x Gramm" = "verb_t1:gramm1"
)) |>
kable(
col.names = c("Coefficient", "$\\hat{\\beta}$", "SE", "t", "df", "p")) |>
kable_styling()

| Coefficient | $\hat{\beta}$ | SE | t | df | p |
|---|---|---|---|---|---|
| Intercept | 5.956 | 0.037 | 161.903 | 79.2 | < .001 |
| Tense | 0.062 | 0.013 | 4.752 | 3637.1 | < .001 |
| Grammaticality | 0.003 | 0.013 | 0.247 | 3637.2 | 0.805 |
| Tense x Gramm | -0.014 | 0.026 | -0.550 | 3637.1 | 0.583 |
| Group | Term | Variance | SD |
|---|---|---|---|
| item | (Intercept) | 0.019 | 0.139 |
| sj | (Intercept) | 0.067 | 0.258 |
| Residual | NA | 0.161 | 0.401 |
We can plot the random effects with the lattice package.

Figure 7: By-participant varying intercepts (lattice::dotplot(ranef(model)))
Alternatively, tidy() from the broom.mixed package + ggplot() lets us plot the deviations (Figure 8 A) and the estimates from coef() (Figure 8 B):

fig_res_dev <-
broom.mixed::tidy(fit_lmm_fp_sj_item, effects = "ran_vals", conf.int = TRUE) |>
filter(group == "sj") |>
ggplot() +
aes(x = estimate, y = reorder(level, estimate)) +
labs(title = "By-participant varying intercepts with 95% CIs",
y = "Participant ID",
x = "Deviation from the intercept") +
geom_vline(xintercept = 0, colour = "red", linetype = "dashed") +
geom_point(colour = "blue") +
geom_errorbar(
aes(xmin = conf.low,
xmax = conf.high)
) +
scale_x_continuous(breaks = c(-0.5,0,0.5)) +
facet_grid(~term)
fig_res_est <- broom.mixed::tidy(fit_lmm_fp_sj_item, effects = "ran_vals", conf.int = TRUE) |>
filter(group == "sj") |>
# back-transform to ms
mutate(across(c(estimate,conf.low,conf.high),~.+fixef(fit_lmm_fp_sj_item)[1])) |>
# mutate(across(c(estimate,conf.low,conf.high),exp)) |>
# plot
ggplot() +
aes(x = estimate, y = reorder(level, estimate)) +
labs(title = "By-participant varying intercepts with 95% CIs",
y = "Participant ID",
x = "Estimate (log)") +
geom_vline(xintercept = fixef(fit_lmm_fp_sj)[1], colour = "red", linetype = "dashed") +
geom_point(colour = "blue") +
geom_errorbar(
aes(xmin = conf.low,
xmax = conf.high)
) +
scale_x_continuous(breaks = c(5.457102,5.957102 ,6.457102)) +
facet_grid(~term)
fig_res_dev + fig_res_est +
plot_annotation(tag_levels = "A")

Figure 8: Back-transformed first-pass reading times (ms) at the verb region with 95% CIs
fig_res_est_ms <-
broom.mixed::tidy(fit_lmm_fp_sj_item, effects = "ran_vals", conf.int = TRUE) |>
filter(group == "sj") |>
# back-transform to ms
mutate(across(c(estimate,conf.low,conf.high),~.+fixef(fit_lmm_fp_sj_item)[1])) |>
mutate(across(c(estimate,conf.low,conf.high),exp)) |>
# plot
ggplot() +
aes(x = estimate, y = reorder(level, estimate)) +
labs(title = "By-participant varying intercepts with 95% CIs",
y = "Participant ID",
x = "Estimate (ms)") +
geom_vline(xintercept = exp(fixef(fit_lmm_fp_sj)[1]), colour = "red", linetype = "dashed") +
geom_point(colour = "blue") +
geom_errorbar(
aes(xmin = conf.low,
xmax = conf.high)
) +
scale_x_continuous(breaks = c(186.4884,386.4884, 586.4884, 786.4884)) +
facet_grid(~term)
fig_res_dev_ms <-
broom.mixed::tidy(fit_lmm_fp_sj_item, effects = "ran_vals", conf.int = TRUE) |>
filter(group == "sj") |>
# back-transform to ms
mutate(across(c(estimate,conf.low,conf.high),~.+fixef(fit_lmm_fp_sj_item)[1])) |>
mutate(across(c(estimate,conf.low,conf.high),exp)) |>
mutate(across(c(estimate,conf.low,conf.high),~.-exp(fixef(fit_lmm_fp_sj_item)[1]))) |>
# plot
ggplot() +
aes(x = estimate, y = reorder(level, estimate)) +
labs(title = "Deviances in by-participant varying intercepts with 95% CIs",
y = "Participant ID",
x = "Deviance (ms)") +
geom_vline(xintercept = 0, colour = "red", linetype = "dashed") +
geom_point(colour = "blue") +
geom_errorbar(
aes(xmin = conf.low,
xmax = conf.high)
) +
# scale_x_continuous(breaks = c(-0.5,0,0.5)) +
facet_grid(~term)
fig_res_est_ms + fig_res_dev_ms + plot_annotation(tag_levels = "A")

Figure 9: By-participant estimates back-transformed to milliseconds
Today we learned…
| Term | Definition | Equation/Code |
|---|---|---|
| linear mixed (effects) model | a regression model that includes both fixed effects and random effects (e.g., varying intercepts by participant and item) | lmer(y ~ x + (1 \| group)) |
Fit a linear mixed model (cf. the lm() function) to log-transformed total reading times (tt) at the adverb region (roi == 2), with adverb time reference (adv_t) and grammaticality (gramm) and their interaction as fixed effects, and by-participant and by-item varying intercepts. Use sum contrast coding (Past and gramm = -0.5, Future and ungramm = +0.5). Save this model as fit_lmm_adv_tt.
Compare the model summaries of fit_lm_adv_tt and fit_lmm_adv_tt.
Inspect the random effects for fit_lmm_adv_tt. Describe what you see.
Plot the random effects per participant and item.
Write up a description of your model as if for a publication (model formula, contrasts, random effects structure, packages/methods used).
Write up the results (coefficient estimates, etc.).